RLCAD: Reinforcement Learning Training Gym for Revolution Involved CAD Command Sequence Generation
Yin, Xiaolong, Lu, Xingyu, Shen, Jiahang, Ni, Jingzhe, Li, Hailong, Tong, Ruofeng, Tang, Min, Du, Peng
A CAD command sequence is a typical parametric design paradigm in 3D CAD systems, where a model is constructed by applying operations such as extrusion, revolution, and Boolean operations to 2D sketches. Although there is growing academic interest in the automatic generation of command sequences, existing methods and datasets support only 2D sketching, extrusion, and Boolean operations. This limitation makes it challenging to represent more complex geometries. In this paper, we present a reinforcement learning (RL) training environment (gym) built on a CAD geometric engine. Given an input boundary representation (B-Rep) geometry, the policy network in the RL algorithm generates an action. This action, along with previously generated actions, is processed within the gym to produce the corresponding CAD geometry, which is then fed back into the policy network. The rewards, determined by the difference between the generated and target geometries within the gym, are used to update the RL network. Our method supports operations beyond sketches, extrusion, and Booleans, including revolution operations. With this training gym, we achieve state-of-the-art (SOTA) quality in generating command sequences from B-Rep geometries. In addition, our method improves the efficiency of command sequence generation by a factor of 39 compared with the previous training gym.
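The interaction loop the abstract describes — policy proposes an action, the gym replays it through a geometric engine, and the reward is the mismatch between generated and target geometry — can be sketched with a toy environment. Everything here is illustrative: the "geometry" is a set of integers, actions toggle one element, and the class and method names are assumptions, not the paper's API.

```python
class CADGymSketch:
    """Toy stand-in for an RL CAD gym: the partial 'geometry' is a set of
    integers, the target is a fixed set, and each action toggles one element.
    (Names and the reward shape are illustrative assumptions.)"""

    def __init__(self, target):
        self.target = set(target)
        self.state = set()

    def step(self, action):
        # Apply the action to the partial model, as the gym would replay the
        # growing command sequence through its geometric engine.
        self.state.symmetric_difference_update({action})
        # Reward: negative size of the symmetric difference between the
        # generated and target geometries (0 when they match exactly).
        reward = -len(self.state ^ self.target)
        done = reward == 0
        return self.state, reward, done


env = CADGymSketch(target={0, 2})
trace = []
done = False
for action in [1, 0, 1, 2]:  # scripted stand-in for policy-network outputs
    state, reward, done = env.step(action)
    trace.append(reward)
```

The reward climbs toward zero as the generated geometry converges on the target, which is the signal a policy-gradient or value-based learner would train on.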
Image2CADSeq: Computer-Aided Design Sequence and Knowledge Inference from Product Images
Computer-aided design (CAD) tools empower designers to design and modify 3D models through a series of CAD operations, commonly referred to as a CAD sequence. In scenarios where digital CAD files are not accessible, reverse engineering (RE) has been used to reconstruct 3D CAD models. Recent advances have seen the rise of data-driven approaches for RE, with a primary focus on converting 3D data, such as point clouds, into 3D models in boundary representation (B-rep) format. However, obtaining 3D data poses significant challenges, and B-rep models do not reveal knowledge about the 3D modeling process of designs. To this end, our research introduces a novel data-driven approach with an Image2CADSeq neural network model. This model aims to reverse engineer CAD models by processing images as input and generating CAD sequences. These sequences can then be translated into B-rep models using a solid modeling kernel. Unlike B-rep models, CAD sequences offer enhanced flexibility to modify individual steps of model creation, providing a deeper understanding of the construction process of CAD models. To quantitatively and rigorously evaluate the predictive performance of the Image2CADSeq model, we have developed a multi-level evaluation framework for model assessment. The model was trained on a specially synthesized dataset, and various network architectures were explored to optimize the performance. The experimental and validation results show great potential for the model in generating CAD sequences from 2D image data.
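The editability argument above — a CAD sequence, unlike a B-rep, lets you change one construction step and regenerate the solid — can be illustrated with a minimal sequence representation. The op vocabulary (a single rectangular sketch-extrude) and the `execute` stand-in for a solid-modeling kernel are deliberate simplifications, not the paper's grammar.

```python
from dataclasses import dataclass


@dataclass
class SketchExtrude:
    """One CAD-sequence step: a rectangular 2D sketch extruded to a depth.
    (A deliberately minimal op; real sequences carry far richer parameters.)"""
    width: float
    height: float
    depth: float


def execute(sequence):
    """Stand-in for a solid-modeling kernel: replays the sequence and returns
    the total volume of the disjoint extrusions."""
    return sum(op.width * op.height * op.depth for op in sequence)


seq = [SketchExtrude(2.0, 3.0, 1.0), SketchExtrude(1.0, 1.0, 4.0)]
volume = execute(seq)

# Editing a single step and re-executing is what a B-rep alone cannot offer:
seq[0].depth = 2.0
volume_after_edit = execute(seq)
```

Changing one parameter of one step and replaying the sequence regenerates the whole model, which is the flexibility the abstract contrasts against a frozen B-rep.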
Geometric Deep Learning for Computer-Aided Design: A Survey
Heidari, Negar, Iosifidis, Alexandros
Geometric Deep Learning techniques have become a transformative force in the field of Computer-Aided Design (CAD), and have the potential to revolutionize how designers and engineers approach and enhance the design process. By harnessing the power of machine learning-based methods, CAD designers can optimize their workflows, save time and effort while making better informed decisions, and create designs that are both innovative and practical. The ability to process the CAD designs represented by geometric data and to analyze their encoded features enables the identification of similarities among diverse CAD models, the proposition of alternative designs and enhancements, and even the generation of novel design alternatives. This survey offers a comprehensive overview of learning-based methods in computer-aided design across various categories, including similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds. Additionally, it provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain. The final discussion delves into the challenges prevalent in this rapidly evolving field, followed by potential future research directions.
Hold 'em and Fold 'em: Towards Human-scale, Feedback-Controlled Soft Origami Robots
Mensah, Immanuel Ampomah, Healey, Jessica, Wu, Celina, Lacunza, Andrea, Hanson, Nathaniel, Dorsey, Kristen L.
An underdeveloped capability in soft robotics is proprioceptive feedback control, where soft actuators can be sensed and controlled using only sensors on the robot's body. Additionally, soft actuators are often unable to support human-scale loads due to the extremely compliant materials in use. Developing both feedback control and the ability to actuate under large loads (e.g. 500 N) are key capacities required to move soft robotics into everyday applications. In this work, we independently demonstrate these key factors towards controlling and actuating human-scale loads: proprioceptive (embodied) feedback control of a soft, pneumatically-actuated origami robot; and actuation of these origami robots under a person's weight in an open-loop configuration. In both demonstrations, the actuators are controlled by internal fluidic pressure. Capacitive sensors patterned onto the robot provide position estimation and serve as input to a feedback controller. We demonstrate position control of a single actuator during stepped setpoints and sinusoidal trajectory following, with root mean square error (RMSE) below 4 mm. We also showcase the actuator's potential towards human-scale robotics as an "origami balance board" by joining three actuators into an open-loop controlled system with a platform that varies its height, roll, and pitch. This work contributes to the field of soft robotics by demonstrating closed-loop feedback position control without visual tracking as an input and lightweight, soft actuators that can support a person's weight. The project repository, including videos, CAD files, and ROS code, is available at https://parses-lab.github.io/kresling_control.
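The closed loop described above — capacitive sensor estimates position, a controller adjusts internal pressure toward a setpoint — can be sketched as a proportional controller on a crudely simplified plant. The perfect sensor, first-order actuator dynamics, and all gains below are assumptions for illustration, not the paper's hardware model or controller.

```python
def simulate_position_control(setpoint_mm, steps=200, kp=0.5, dt=0.01):
    """Toy proportional position controller for one pneumatic actuator.
    The capacitive sensor is modeled as a perfect position reading and the
    actuator as a simple first-order response from commanded pressure change
    to position (strong simplifications of the real system)."""
    position = 0.0
    for _ in range(steps):
        error = setpoint_mm - position    # feedback from the position sensor
        command = kp * error              # controller output (pressure delta)
        position += command * dt * 50.0   # crude actuator dynamics
    return position


final = simulate_position_control(20.0)
```

With a proportional gain and a stable plant, the position converges to the stepped setpoint, which is the behavior the paper quantifies with sub-4 mm RMSE on real hardware.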
Describing Robots from Design to Learning: Towards an Interactive Lifecycle Representation of Robots
Qiu, Nuofan, Wan, Fang, Song, Chaoyang
As autonomous machines capable of interacting with the real world, various types of robots, such as wheeled mobile robots, quadrupedal robots, and humanoid robots, are emerging in domestic, factory, and other environments to collaborate with humans or accomplish tasks independently. The morphology of a robot is the essential factor that most directly affects the robot's configuration space, thereby determining the robot's function [1]. Robot morphology is primarily determined during the design process, thanks to the development of computer-aided design (CAD) technology, which makes it cost-effective, time-saving, and efficient compared to the manufacturing process. Beyond robot morphology, learning has become an essential topic in robotics because it enables robots to achieve complex tasks and, thus, better interact with the environment. However, training robots in hardware may lead to failures or damage, making it expensive and time-consuming.
A knowledge-driven framework for synthesizing designs from modular components
Chaumet, Constantin, Rehof, Jakob, Schuster, Thomas
The third step of the design process, implementing the chosen design in CAD, entails many repetitive and menial tasks, such as inserting parts and creating joints between them. This issue is compounded when comparing and implementing design alternatives. We propose a use-case-agnostic, knowledge-driven framework to automate the implementation step. In particular, the framework catalogues the acquired knowledge and the design concept, and utilizes Combinatory Logic Synthesis to synthesize concrete design alternatives. This minimizes the effort required to create designs, allowing the design space to be explored thoroughly. We implemented the framework as a plugin for the CAD software Autodesk Fusion 360. We conducted a case study in which robotic arms were synthesized from a set of 28 modular components. Based on the case study, the applicability of the framework is analyzed and discussed.
SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
Li, Pu, Guo, Jianwei, Zhang, Xiaopeng, Yan, Dong-ming
Reverse engineering CAD models from raw geometry is a classic but strenuous research problem. Previous learning-based methods rely heavily on labels due to the supervised design patterns or reconstruct CAD shapes that are not easily editable. In this work, we introduce SECAD-Net, an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models in a self-supervised manner. Drawing inspiration from the modeling language that is most commonly used in modern CAD software, we propose to learn 2D sketches and 3D extrusion parameters from raw shapes, from which a set of extrusion cylinders can be generated by extruding each sketch from a 2D plane into a 3D body. By incorporating the Boolean operation (i.e., union), these cylinders can be combined to closely approximate the target geometry. We advocate the use of implicit fields for sketch representation, which allows for creating CAD variations by interpolating latent codes in the sketch latent space. Extensive experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness of our method, and show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction. We further apply our approach to CAD editing and single-view CAD reconstruction. The code is released at https://github.com/BunnySoCrazy/SECAD-Net.
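The modeling recipe in this abstract — extrude each 2D sketch into a cylinder, then union the cylinders to approximate the target — can be sketched with implicit occupancy tests. Restricting sketches to circles and the function names are assumptions for illustration; SECAD-Net learns general sketch profiles as implicit fields.

```python
import math


def in_extrusion(p, center, radius, z0, z1):
    """Occupancy of one extrusion cylinder: a circular 2D sketch at `center`
    with `radius`, extruded along z from z0 to z1. (Circular profiles only;
    the actual method represents arbitrary sketches implicitly.)"""
    x, y, z = p
    inside_sketch = math.hypot(x - center[0], y - center[1]) <= radius
    return inside_sketch and z0 <= z <= z1


def in_union(p, cylinders):
    # Boolean union: a point is occupied if any extrusion primitive contains it.
    return any(in_extrusion(p, *c) for c in cylinders)


shape = [((0.0, 0.0), 1.0, 0.0, 2.0),   # tall cylinder at the origin
         ((1.5, 0.0), 1.0, 0.0, 1.0)]   # shorter, offset cylinder

a = in_union((0.0, 0.0, 1.5), shape)    # inside the first cylinder only
b = in_union((1.5, 0.0, 0.5), shape)    # inside the second cylinder
c = in_union((3.0, 3.0, 0.5), shape)    # outside both
```

Evaluating this occupancy over a grid and comparing it against the target shape's occupancy is the self-supervised signal that makes labels unnecessary.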
Material Prediction for Design Automation Using Graph Representation Learning
Bian, Shijie, Grandi, Daniele, Hassani, Kaveh, Sadler, Elliot, Borijin, Bodia, Fernandes, Axel, Wang, Andrew, Lu, Thomas, Otis, Richard, Ho, Nhut, Li, Bingbing
Successful material selection is critical in designing and manufacturing products for design automation. Designers leverage their knowledge and experience to create high-quality designs by selecting the most appropriate materials through performance, manufacturability, and sustainability evaluation. Intelligent tools can help designers with varying expertise by providing recommendations learned from prior designs. To enable this, we introduce a graph representation learning framework that supports the material prediction of bodies in assemblies. We formulate the material selection task as a node-level prediction task over the assembly graph representation of CAD models and tackle it using Graph Neural Networks (GNNs). Evaluations over three experimental protocols performed on the Fusion 360 Gallery dataset indicate the feasibility of our approach, achieving a 0.75 top-3 micro-f1 score. The proposed framework can scale to large datasets and incorporate designers' knowledge into the learning process. These capabilities allow the framework to serve as a recommendation system for design automation and a baseline for future work, narrowing the gap between human designers and intelligent design agents.
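Node-level material prediction over an assembly graph, as formulated above, reduces to aggregating each body's features with its neighbors' and classifying the result. The single mean-aggregation round, the two-dimensional features, and the nearest-prototype classifier below are minimal stand-ins for the paper's GNN layers, not its architecture.

```python
def mean_aggregate(features, adjacency):
    """One message-passing round over an assembly graph: each body's feature
    vector becomes the mean of itself and its neighbors' vectors."""
    out = []
    for i, f in enumerate(features):
        group = [f] + [features[j] for j in adjacency[i]]
        out.append([sum(col) / len(group) for col in zip(*group)])
    return out


def predict_material(feature, prototypes):
    """Node-level prediction: pick the material whose prototype vector is
    closest (squared distance) to the aggregated body feature."""
    return min(prototypes,
               key=lambda m: sum((a - b) ** 2
                                 for a, b in zip(feature, prototypes[m])))


# Three bodies; body 1 touches bodies 0 and 2 in the assembly.
feats = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
protos = {"steel": [1.0, 0.0], "plastic": [0.0, 1.0]}

h = mean_aggregate(feats, adj)
labels = [predict_material(f, protos) for f in h]
```

A trained GNN replaces both hand-written steps with learned aggregation and a learned classifier head, but the graph-in, per-node-label-out shape of the task is the same.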
SimCURL: Simple Contrastive User Representation Learning from Command Sequences
Chu, Hang, Khasahmadi, Amir Hosein, Willis, Karl D. D., Anderson, Fraser, Mao, Yaoli, Tran, Linh, Matejka, Justin, Vermeulen, Jo
User modeling is crucial to understanding user behavior and essential for improving user experience and personalized recommendations. When users interact with software, vast amounts of command sequences are generated through logging and analytics systems. These command sequences contain clues to the users' goals and intents. However, these data modalities are highly unstructured and unlabeled, making it difficult for standard predictive systems to learn from. We propose SimCURL, a simple yet effective contrastive self-supervised deep learning framework that learns user representation from unlabeled command sequences. Our method introduces a user-session network architecture, as well as session dropout as a novel way of data augmentation. We train and evaluate our method on a real-world command sequence dataset of more than half a billion commands. Our method shows significant improvement over existing methods when the learned representation is transferred to downstream tasks such as experience and expertise classification.
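The session-dropout augmentation the abstract introduces can be sketched directly: to build two "views" of one user for contrastive learning, each session in the user's command history is independently dropped with some probability. The function name and the keep-at-least-one rule are assumptions for this sketch; the paper defines its own variant.

```python
import random


def session_dropout(sessions, drop_prob, rng):
    """Session-level dropout as data augmentation: each session in a user's
    history is independently dropped with probability `drop_prob`, but at
    least one session is always kept so the user is never empty.
    (Keep-one rule is an assumption of this sketch.)"""
    kept = [s for s in sessions if rng.random() >= drop_prob]
    return kept if kept else [rng.choice(sessions)]


rng = random.Random(7)
user = [["open", "extrude"], ["sketch", "revolve"], ["save"]]

# Two stochastic views of the same user, as a contrastive positive pair.
view_a = session_dropout(user, 0.5, rng)
view_b = session_dropout(user, 0.5, rng)
```

A contrastive objective would then pull the encodings of `view_a` and `view_b` together while pushing apart views drawn from different users.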
What is Generative Design?
Generative Design is an AI-driven, iterative design process that generates several feasible outputs for a given set of rules and constraints. This approach can produce unique design outcomes that a designer might never have imagined, yet that still meet every design boundary the designer has set. Generative Design breaks traditional design barriers by changing how the design space is explored: the algorithm considers a far broader range of design possibilities than the designer's own perspective alone. The designer can then choose the best-fitting model from multiple solutions based on their needs. Autodesk Fusion 360 offers a Generative Design workspace for exploring different design outcomes, helping engineers create higher-performance, higher-quality products; the algorithmic computation itself runs in the cloud.